Search Results: "igor"

27 May 2017

Russ Allbery: On time management

Last December, the Guardian published a long essay by Oliver Burkeman entitled "Why time management is ruining our lives". Those who follow my book reviews know I read a lot of time management books, so of course I couldn't resist this one. And, perhaps surprisingly, not in order to disagree with it. It's an excellent essay, and well worth your time. Burkeman starts by talking about Inbox Zero:
If all this fervour seems extreme (Inbox Zero was just a set of technical instructions for handling email, after all), this was because email had become far more than a technical problem. It functioned as a kind of infinite to-do list, to which anyone on the planet could add anything at will.
This is, as Burkeman develops in the essay, an important critique of time management techniques in general, not just Inbox Zero: perhaps you can become moderately more efficient, but what are you becoming more efficient at doing, and why does it matter? If there were a finite amount of things that you had to accomplish, with leisure the reward at the end of the fixed task list, doing those things more efficiently makes perfect sense. But this is not the case in most modern life. Instead, we live in a world governed by Parkinson's Law: "Work expands to fill the time available for its completion." Worse, we live in a world where the typical employer takes Parkinson's Law, not as a statement on the nature of ever-expanding to-do lists, but as a challenge to compress the time made available for a task to try to force the work to happen faster.

Burkeman goes farther into the politics, pointing out that a cui bono analysis of time management suggests that we're all being played by capitalist employers. I wholeheartedly agree, but that's worth a separate discussion; for those who want to explore that angle, David Graeber's Debt and John Kenneth Galbraith's The Affluent Society are worth your time.

What I want to write about here is why I still read (and recommend) time management literature, and how my thinking on it has changed. I started in the same place that most people probably do: I had a bunch of work to juggle, I felt I was making insufficient forward progress on it, and I felt my day contained a lot of slack that could be put to better use. The alluring promise of time management is that these problems can be resolved with more organization and some focus techniques. And there is a huge surge of energy that comes with adopting a new system and watching it work, since the good ones build psychological payoff into the tracking mechanism. Starting a new time management system is fun! Finishing things is fun!

I then ran into the same problem that I think most people do: after that initial surge of enthusiasm, I had lists, systems, techniques, data on where my time was going, and a far more organized intake process. But I didn't feel more comfortable with how I was spending my time, I didn't have more leisure time, and I didn't feel happier. Often the opposite: time management systems will often force you to notice all the things you want to do and how slow your progress is towards accomplishing any of them.

This is my fundamental disagreement with Getting Things Done (GTD): David Allen firmly believes that the act of recording everything that is nagging at you to be done relieves the brain of draining background processing loops and frees you to be more productive. He argues for this quite persuasively; as you can see from my review, I liked his book a great deal, and used his system for some time. But, at least for me, this does not work. Instead, having a complete list of goals towards which I am making slow or no progress is profoundly discouraging and depressing. The process of maintaining and dwelling on that list while watching it constantly grow was awful, quite a bit worse psychologically than having no time management system at all.

Mark Forster is the time management author who speaks the best to me, and one of the points he makes is that time management is the wrong framing. You're not going to somehow generate more time, and you're usually not managing minutes and seconds.
A better framing is task management, or commitment management: the goal of the system is to manage what you mentally commit to accomplishing, usually by restricting that list to something far shorter than you would come up with otherwise. How, in other words, to limit your focus to a small enough set of goals that you can make meaningful progress instead of thrashing.

That, for me, is now the merit and appeal of time (or task) management systems: how do I sort through all the incoming noise, distractions, requests, desires, and compelling ideas that life throws at me and figure out which of them are worth investing time in? I also benefit from structuring that process for my peculiar psychology, in which backlogs I have to look at regularly are actively dangerous for my mental well-being. Left unchecked, I can turn even the most enjoyable hobby into an obligation and then into a source of guilt for not meeting the (entirely artificial) terms of the obligation I created, without even intending to.

And here I think it has a purpose, but it's not the purpose that the time management industry is selling. If you think of time management as a way to get more things done and get more out of each moment, you're going to be disappointed (and you're probably also being taken advantage of by the people who benefit from unsustainable effort without real, unstructured leisure time). I practice Inbox Zero, but the point wasn't to be more efficient at processing my email. The point was to avoid the (for me) psychologically damaging backlog of messages while acting on the knowledge that 99% of email should go immediately into the trash with no further action. Email is an endless incoming stream of potential obligations or requests for my time (even just to read a longer message) that I should normally reject. I also take the time to notice patterns of email that I never care about and then shut off the source or write filters to delete that email for me. I can then reserve my email time for moments of human connection, directly relevant information, or very interesting projects, and spend the time on those messages without guilt (or at least much less guilt) about ignoring everything else.

Prioritization is extremely difficult, particularly once you realize that true prioritization is not about first and later, but about soon or never. The point of prioritization is not to choose what to do first, it's to choose the 5% of things that you are going to do at all, convince yourself to be mentally okay with never doing the other 95% (and not lying to yourself about how there will be some future point when you'll magically have more time), and vigorously defend your focus and effort for that 5%. And, hopefully, wholeheartedly enjoy working on those things, without guilt or nagging that there's something else you should be doing instead. I still fail at this all the time. But I'm better than I used to be.

For me, that mental shift was by far the hardest part. But once you've made that shift, I do think the time management world has a lot of tools and techniques to help you make more informed choices about the 5%, and to help you overcome procrastination and loss of focus on your real goals. Those real goals should include true unstructured leisure and "because I want to" projects. And hopefully, if you're in a financial position to do it, include working less on what other people want you to do and more on the things that delight you.
Or at least making a well-informed strategic choice (for the sake of money or some other concrete and constantly re-evaluated reason) to sacrifice your personal goals for some temporary external ones.

20 February 2017

Russ Allbery: Haul via parents

My parents were cleaning out a bunch of books they didn't want, so I grabbed some of the ones that looked interesting. A rather wide variety of random stuff. Also, a few more snap purchases on the Kindle even though I've not been actually finishing books recently. (I do have two finished and waiting for me to write reviews, at least.) Who knows when, if ever, I'll read these.
Mark Ames Going Postal (nonfiction)
Catherine Asaro The Misted Cliffs (sff)
Ambrose Bierce The Complete Short Stories of Ambrose Bierce (collection)
E. William Brown Perilous Waif (sff)
Joseph Campbell The Hero with a Thousand Faces (nonfiction)
Jacqueline Carey Miranda and Caliban (sff)
Noam Chomsky 9-11 (nonfiction)
Noam Chomsky The Common Good (nonfiction)
Robert X. Cringely Accidental Empires (nonfiction)
Neil Gaiman American Gods (sff)
Neil Gaiman Norse Mythology (sff)
Stephen Gillet World Building (nonfiction)
Donald Harstad Eleven Days (mystery)
Donald Harstad Known Dead (mystery)
Donald Harstad The Big Thaw (mystery)
James Hilton Lost Horizon (mainstream)
Spencer Johnson The Precious Present (nonfiction)
Michael Lerner The Politics of Meaning (nonfiction)
C.S. Lewis The Joyful Christian (nonfiction)
Grigori Medvedev The Truth about Chernobyl (nonfiction)
Tom Nadeu Seven Lean Years (nonfiction)
Barack Obama The Audacity of Hope (nonfiction)
Ed Regis Great Mambo Chicken and the Transhuman Condition (nonfiction)
Fred Saberhagen Berserker: Blue Death (sff)
Al Sarrantonio (ed.) Redshift (sff anthology)
John Scalzi Fuzzy Nation (sff)
John Scalzi The End of All Things (sff)
Kristine Smith Rules of Conflict (sff)
Henry David Thoreau Civil Disobedience and Other Essays (nonfiction)
Alan W. Watts The Book (nonfiction)
Peter Whybrow A Mood Apart (nonfiction)
I've already read (and reviewed) American Gods, but didn't own a copy of it, and that seemed like a good book to have a copy of. The Carey and Brown were snap purchases, and I picked up a couple more Scalzi books in a recent sale.

25 October 2016

Jaldhar Vyas: Aaargh gcc 5.x You Suck

I had to write a quick program today which is going to be run many thousands of times a day, so it has to run fast. I decided to do it in c++ instead of the usual perl or javascript because it seemed appropriate, and I've been playing around a lot with c++ lately, trying to update my knowledge of its modern features. So, 200 LOC later, I was almost done and ran the program through valgrind, a good habit I've been trying to instill. That's when I got a reminder of why I avoid c++.

==37698== HEAP SUMMARY:
==37698==     in use at exit: 72,704 bytes in 1 blocks
==37698==   total heap usage: 5 allocs, 4 frees, 84,655 bytes allocated
==37698== 
==37698== LEAK SUMMARY:
==37698==    definitely lost: 0 bytes in 0 blocks
==37698==    indirectly lost: 0 bytes in 0 blocks
==37698==      possibly lost: 0 bytes in 0 blocks
==37698==    still reachable: 72,704 bytes in 1 blocks
==37698==         suppressed: 0 bytes in 0 blocks
One of the things I've learnt, and which I've been trying to apply more rigorously, is to avoid manual memory management (new/delete) as much as possible in favor of modern c++ features such as std::unique_ptr etc. By my estimation there should only be three places in my code where memory is allocated, and none of them should leak. Where do the others come from? And why is there a missing free (or delete)? Now the good news is that valgrind is saying that the memory is not technically leaking. It is still reachable at exit, but that's ok because the OS will reclaim it. But this program will run a lot and I think it could still lead to problems over time, such as memory fragmentation, so I wanted to understand what was going on. Not to mention the bad aesthetics of it. My first assumption (one which has served me well over the years) was that I had screwed up somewhere. Or perhaps it could be some behind-the-scenes compiler magic. It turned out to be the latter -- sort of -- as I found out only after two hours of jiggling code in different ways and googling for clues. That's when I found this Stack Overflow question which suggests that it is either a valgrind or compiler bug. The answer specifically mentions gcc 5.1. I was using Ubuntu LTS which has gcc 5.4, so I have just gone ahead and assumed all 5.x versions of gcc have this problem. Sure enough, compiling the same program on Debian stable, which has gcc 4.9, gave this...

==6045== 
==6045== HEAP SUMMARY:
==6045==     in use at exit: 0 bytes in 0 blocks
==6045==   total heap usage: 3 allocs, 3 frees, 10,967 bytes allocated
==6045== 
==6045== All heap blocks were freed -- no leaks are possible
==6045== 
...Much better. The executable was substantially smaller too. The time was not a total loss, however. I learned that valgrind is pronounced val-grinned (it's from Norse mythology), not val-grind as I had thought. So I have that going for me, which is nice.
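For anyone who wants to see the effect in isolation, here is a minimal sketch of my own (not Jaldhar's program, which isn't shown): programs built with g++ 5.x and its libstdc++ are commonly reported to show this same ~72,704-byte "still reachable" block even when they leak nothing themselves, apparently because the runtime allocates an emergency exception-handling pool at startup and never frees it.

// leakcheck.cpp -- hypothetical minimal example; it allocates and frees
// everything it uses, yet valgrind still reports one reachable block on
// the affected toolchains described above.
#include <iostream>
#include <memory>
#include <string>

int main() {
    auto msg = std::make_unique<std::string>("hello");  // freed automatically
    std::cout << *msg << '\n';
    return 0;
}

Building this with "g++ leakcheck.cpp" and running "valgrind ./a.out" on an affected toolchain should show a still-reachable summary like the first one above, while the same file built with gcc 4.9 should come back clean.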

19 August 2016

Simon Désaulniers: [GSOC] Final report




The Google Summer of Code is now over. It has been a great experience and I'm very glad I've been able to make it. I've had the pleasure of contributing to a project showing very good promise for the future of communication: Ring. The words privacy and freedom in terms of technology are more and more present in people's minds. All sorts of projects wanting to achieve these goals are coming to life each day, like decentralized web networks (ZeroNet, for example), blockchain-based applications, etc.

Debian
I've had the great opportunity to go to the Debian Conference 2016. I've been introduced to the Debian community and Debian developers ("DD" in short :p). I was lucky to meet great people like the president of the FSF, John Sullivan. You can have a look at my Debian conference report here. If you want to read my Debian reports, you can do so by browsing the Google Summer Of Code category on this blog.

What I have done
Ring has been in the official Debian repositories since June 30th. This is good news for the GNU/Linux community. I'm proud to say that I've been able to contribute to Debian by working on OpenDHT and developing new functionality to reduce network traffic. The goal behind this was to finally optimize the data persistence traffic consumption on the DHT. Github repository: https://github.com/savoirfairelinux/opendht

Queries
Issues:
  • #43: DHT queries
Pull requests:
  • #79: [DHT] Queries: remote values filtering
  • #93: dht: return consistent query from local storage
  • #106: [dht] rework get timings after queries in master

Value pagination
Issues:
  • #71: [DHT] value pagination
Pull requests:
  • #110: dht: Value pagination using queries
  • #113: dht: value pagination fix

Indexation (feat. Nicolas Reynaud)
Pull requests:
  • #77: pht: fix invalid comparison, inexact match lookup
  • #78: [PHT] Key consistency

General maintenance of OpenDHT
Issues:
  • #72: Packaging issue for Python bindings with CMake: $DESTDIR not honored
  • #75: Different libraries built with Autotools and CMake
  • #87: OpenDHT does not build on armel
  • #92: [DhtScanner] doesn't compile on LLVM 7.0.2
  • #99: 0.6.2 filenames in 0.6.3
Pull requests:
  • #73: dht: consider IPv4 or IPv6 disconnected on operation done
  • #74: [packaging] support python installation with make DESTDIR=$DIR
  • #84: [dhtnode] user experience
  • #94: dht: make main store a vector>
  • #94: autotools: versionning consistent with CMake
  • #103: dht: fix sendListen loop bug
  • #106: dht: more accurate name for requested nodes count
  • #108: dht: unify bootstrapSearch and refill method using node cache

View by commits
You can have a look at my work by commits just by clicking this link: https://github.com/savoirfairelinux/opendht/commits/master?author=sim590

What's left to be done

Data persistence
The only thing left to complete my work is to rigorously test the data persistence behavior in order to demonstrate the reduction in network traffic. To do so, we use our benchmark python module. We are able to analyse traffic and produce plots like this one:

Plot: 32 nodes, 1600 values with normal condition test.
This particular plot was drawn before the enhancements. We are confident that the results will improve with the work I produced during the GSOC.

TCP
In the middle of the GSOC, we realized that moving from UDP to TCP would require too much effort in too short a lapse of time. Also, it is not yet clear whether we should really do that.

16 June 2016

John Goerzen: Mud, Airplanes, Arduino, and Fun

The last few weeks have been pretty hectic in their way, but I've also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud
For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty. Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then... water. They put the hose over the slide and made a water slide (more of a mud slide, maybe). IMG_7688 Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours. MVI_7738

Flying to Petit Jean, Arkansas
Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it's rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain. And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning: IMG_7475 And here's a view of one of the trails: IMG_7576 The sunset views were pretty nice, too: IMG_7610 And finally, the plane we flew out in, parked all by itself on the ramp: IMG_20160522_171823 It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison
Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too. Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River. IMG_20160611_134313 I was amused to find this hanging in the county historical society museum: IMG_20160611_153826 One fascinating find is a Regina Music Box, popular in the late 1800s and early 1900s. It operates on the same principles as the cylindrical music boxes you might have seen. But I am particularly impressed with the effort that would go into developing these discs in the pre-computer era, as of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum: VID_20160611_151504

An Arduino Project with Jacob
One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his "police station", so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn't ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off: VID_20160614_074802.mp4

3 May 2016

Russ Allbery: Review: The Effective Engineer

Review: The Effective Engineer, by Edmond Lau
Publisher: Effective Bookshelf
Copyright: 2015
ISBN: 0-9961281-0-7
Format: Trade paperback
Pages: 222
Silicon Valley start-up tech companies have a standard way of thinking about work. Large chunks of this come from Google, which pioneered a wide variety of new, or at least not-yet-mainstream, ways of organizing and thinking about work. The rest accreted through experience with fast-paced start-ups, engineer-focused companies, web delivery of products, and rabid turnover and high job mobility within a hothouse of fairly similar companies. A key part of this mindset is the firm belief that this atmosphere has created a better way to work, at least for software engineers (and systems administrators, although heaven forbid that one call them that any more): more effective, more efficient, more focused on what really matters.

I think this is at least partly true, at least from the perspective of a software engineer. This Silicon Valley work structure focuses on data gathering, data-based decision-making, introspection, analysis, and continuous improvement, all of which I think are defensibly pointed in the right direction (if rarely as rigorous as one might want to believe). It absorbs bits and pieces of work organization techniques that are almost certainly improvements for the type of work software engineers do: Agile, Lean, continuous deployment, and fast iteration times. In other cases, though, I'm less convinced that this Silicon Valley consensus is objectively better as opposed to simply different; interviewing, for instance, is a puzzle that I don't think anyone has figured out, and the remarkable consensus in Silicon Valley on how to interview (basically, "like Google except for the bits we thought were obnoxious") feels more like a social fad than a sign of getting it right. But every industry has its culture of good ideas, bad ideas, fads, and fashion, and it's quite valuable to know that culture if you want to work in that industry.

The Effective Engineer is a self-published book by Edmond Lau, a Silicon Valley software engineer who also drifted (as is so common in Silicon Valley) into mentoring, organizing, and speaking to other software engineers. Its purpose, per the subtitle, is to tell you "how to leverage your efforts in software engineering to make a disproportionate and meaningful impact." While that's not exactly wrong, and the book contains some useful and valuable tips, I'd tend to give it a slightly different subtitle: "a primer on how a Silicon Valley software engineer is expected to think about their work." This is a bit more practical, a bit less confident, and a bit less convinced of its own correctness than Lau might want to present his work, but it's just as valuable of a purpose if you want to work in the industry. (And is a bit more honest about its applicability outside of that industry.)

What this book does extremely well is present, in a condensed, straightforward, and fast-moving form, most of the highlights of how start-ups and web-scale companies approach software engineering and the SWE role in companies (SWE, meaning software engineer, is another bit of Google terminology that's now nearly universal).
If you've already worked in or around this industry for a while, you've probably picked up a lot of this via osmosis: prioritize based on impact and be unapologetic about letting other things drop, have a growth mindset, reprioritize regularly, increase your iteration speed, measure everything constantly, check your assumptions against data, derisk your estimates, use code review and automated testing (but not too much), automate operations, and invest heavily in hiring and onboarding. (The preceding list is a chapter list for this book.) If you're working at one of these sorts of companies, you're probably currently somewhere between nodding and rolling your eyes because no one at work will shut up about these topics. But if you've not worked inside one of these companies, even if you've done software engineering elsewhere, this is a great book to read to prepare yourself. You're going to hear about these ideas constantly, and, if it achieves nothing else at all, The Effective Engineer will give you a firm enough grounding in the lingo and mindset that you can have intelligent conversations with people who assume this is the only way to think about software engineering.

By this point, you might be detecting a certain cynicism in this review. It's not entirely fair: a lot of these ideas are clearly good ones, and Lau does a good job of describing them quickly and coherently. It's a good job for what it is. But there are a couple of things that limited its appeal for me.

First, it's definitely a primer. I read it after having worked at a web-scale start-up for a year and a half. There wasn't much in it that seemed particularly new, and it's somewhat superficial. The whole middle section in particular (build tools for yourself, measure everything, be data-driven) covers topics for which the devil is often in the details. Lau gives you the terminology and the expected benefits, but putting any one of these techniques into practice could be a book (or several) by itself. Don't expect to come away from The Effective Engineer with much of a concrete plan for how to do these things in your day-to-day software development projects. But it's a good reminder to be thinking about, say, how to embed metrics and data-gathering hooks into the software you write. This is the nature of a primer; no 222-page book can get into much depth about the fractal complexity of doing good, fast, scalable software development.

Second, there's a fundamental question raised by a book like this: effective at what? Lau tackles that in the first chapter with his focus on impact and leverage, and it's good advice as far as it goes. (Regular readers of my book reviews know that I love this sort of time management and prioritization discussion.) But measuring impact is a hard problem that requires a prioritization framework, and this is not really the book for this. The Effective Engineer is written primarily for software developers at start-ups, leaves the whole venture-capital start-up process as unquestioned background material, and accepts without comment the standard measures of value in that world: fast-deployed products, hypergrowth, racing competitors for perceived innovation, and finding ways to extract money. That's as deep into the question of impact as Lau gets: increases in company revenue. There's nothing wrong with this for the kind of book Lau intended to write, and it's not his fault that I find it unsatisfying.
But don't expect The Effective Engineer to ask any hard questions about whether that's a meaningful definition of impact, or to talk much about less objective goals: quality of implementation, craftsmanship, giving back to a broader community via free software contributions, impact on the world in ways that can't be measured in market share, or anything else that is unlikely to lead to objective impact for company profits. At best he leaves a bit of wiggle room around using the concept of impact with different goals. If you're a new graduate who wants to work at Silicon-Valley-style start-ups, this is a great orientation, and likewise if you're coming from a different area of software development into that world. If you're not working in that industry, The Effective Engineer may still be moderately interesting, but it's not written for that audience and has little or nothing to say of the challenges of other types of businesses. But if you've already worked in the industry for a while, or if you're more interested in deeper discussions of goals and subjective values, you may not get much out of this. Rating: 7 out of 10

25 April 2016

Norbert Preining: Gödel and Daemons - an excursion into literature

Explaining Gödel's theorems to students is a pain. Period. How can those poor creatures crank their minds around a Completeness and an Incompleteness Proof? I understand that. But then, there are brave souls using Gödel's theorems to explain the world of demons to writers, in particular to answer the question:
You can control a Demon by knowing its True Name, but why?
goedel-glabrezu Very impressive. Found at worldbuilding.stackexchange.com, pointed out to me by a good friend. I dare to quote author Cort Ammon (nothing more is known) in full, to preserve this masterpiece. Thanks!!!!
Use of their name forces them to be aware of the one truth they can never know. Tl;dr: If demons seek permanent power but trust no one, they put themselves in a strange position where mathematical truisms paint them into a corner which leaves their soul small and frail, holding all the strings. Use of their name suggests you might know how to tug at those strings and unravel them wholesale, from the inside out!

Being a demon is tough work. If you think facing down a 4000lb Glabrezu without their name is difficult, try keeping that much muscle in shape in the gym! Never mind how many manicurists you go through keeping the claws in shape! I don't know how creative such demons truly are, but the easy route towards the perfect French tip that can withstand the rigors of going to the gym and benching ten thousand pounds is magic. Such a demon might learn a manicure spell from the nearby resident succubi. However, such spells are often temporary. No demon worth their salt is going to admit in front of a hero that they need a moment to refresh their mani before they can fight. The hero would just laugh at them. No, if a demon is going to do something, they're going to do it right, and permanently. Not just nice french tips with a clear lacquer over the top, but razor sharp claws that resharpen themselves if they are blunted and can extend or retract at will! In fact, come to think of it, why even go to the gym to maintain one's physique? Why not just cast a magic spell which permanently makes you into the glorious Hanz (or Franz) that the trainer keeps telling you is inside you, just waiting to break free. Just get the spell right once, and think of the savings you could have on gym memberships.

Demons that wish to become more powerful, permanently, must be careful. If fairy tales have anything to teach us, it's that one of the most dangerous things you can do is wish for something forever, and have it granted. Forever is a very long time, and every spell has its price. The demon is going to have to make sure the price is not greater than the perks. It would be a real waste to have a manicure spell create the perfect claws, only to find that they come with a peculiar perchance to curve towards one's own heart in an attempt to free themselves from the demon that cast them. So we need proofs. We need proofs that each spell is a good idea, before we cast it. Then, once we cast it, we need proof that the spell actually worked as intended. Otherwise, who knows if the next spell will layer on top perfectly or not.

Mathematics to the rescue! The world of First Order Logic (FOL, or hereafter simply "logic") is designed to offer these guarantees. With a few strokes of a pen, pencil, or even brush, it can write down a set of symbols which prove, without a shadow of a doubt, that not only will the spell work as intended, but that the side effects are manageable. How? So long as the demon can prove that they can cast a negation spell to undo their previous spell, the permanency can be reverted by the demon. With a few more fancy symbols, the demon can also prove that nobody else outside of the demon can undo their permanency. It's a simple thing for mathematics really. Mathematics has an amazing spell called reductio ad infinitum which does unbelievable things. However, there is a catch. There is always a catch with magic, even when that magic is being done through mathematics. In 1931, Kurt Gödel published his Incompleteness Theorems.
These are 3 fascinating works of mathematical art which invoke the true names of First Order Logic and Set Theory. Gödel was able to prove that any system which is powerful enough to prove out all of algebra (1 + 1 = 2, 2 + 1 = 3, 3 * 5 = 15, etc.), could not prove its own validity. The self referential nature of proving itself crossed a line that First Order Logic simply could not return from. He proved that any system which tries must pick up one of these five traits: If the demon wants itself to be able to cancel the spell, his proof is going to have to include his own abilities, creating just the kind of self referential effects needed to invoke Gödel's incompleteness theorems.

After a few thousand years, the demon may realize that this is folly. A fascinating solution the demon might choose is to explore the incomplete solution to Gödel's challenge. What if the demon permits the spell to change itself slightly, but in an unpredictable way. If the demon was a harddrive, perhaps he lets a single byte get changed by the spell in a way he cannot expect. This is actually enough to sidestep Gödel's work, by introducing incompleteness. However, now we have to deal with pesky laws of physic and magics. We can't just create something out of nothing, so if we're going to let the spell change a single byte of us, there must be a single byte of information, its dual, that is unleashed into the world. Trying to break such conservation laws opens up a whole can of worms. Better to let that little bit go free into the world. Well, almost. If you repeat this process a whole bunch of times, layering spells like a Matryoshka doll, you're eventually left with a soul that is nothing but the leftover bits of your spells that you simply don't know enough about to use. If someone were collecting those bits and pieces, they might have the undoing of your entire self. You can't prove it, of course, but it's possible that those pieces that you sent out into the world have the keys to undo your many layers of armor, and then you know they are the bits that can nullify your soul if they get there. So what do you do? You hide them. You cast your spells only on the darkest of nights, deep in a cave where no one can see you. If you need assistants, you make sure to ritualistically slaughter them all, lest one of them know your secret and whisper it to a bundle of reeds, "The king has horns", if you are familiar with the old fairy tale. Make it as hard as possible for the secret to escape, and hope that it withers away to nothingness before someone discovers it, leaving you invincible.

Now we come back to the name. The demon is going to have a name it uses to describe its whole self, including all of the layers of spellcraft it has acquired. This will be a great name like "Abraxis, the Unbegotten Father" or "Satan, lord of the underworld". However, they also need to keep track of their smaller self, their soul. Failure to keep track of this might leave them open to an attack if they had missed a detail when casting their spells, and someone uncovered something to destroy them. This would be their true name, potentially something less pompous, like Gaylord Focker or Slartybartfarst. They would never use this name in company. Why draw attention to the only part of them that has the potential to be weak? So when the hero calls out for Slartybartfarst, the demon truly must pay attention. If they know the name the demon has given over the remains of their tattered soul, might they know how to undo the demon entirely?
Fear would grip their inner self, like a child, having to once again consider that they might be mortal. Surely they would wish to destroy the hero that spoke the name, but any attempt runs the risk of falling into a trap and exposing a weakness (surely their mind is racing, trying to enumerate all possible weaknesses they have). It is surely better for them to play along with you, once you use their true name, until they understand you well enough to confidently destroy you without destroying themselves. So you ask for answers which are plausible.

This one needs no magic at all. None of the rules are invalid in our world today. Granted, finding a spell of perfect manicures might be difficult (believe me, some women have spent their whole life searching), but the rules are simply those of math. We can see this math in non-demonic parts of society as well. Consider encryption. An AES-256 key is so hard to brute force that it is currently believed it is impossible to break it without consuming 3/4 of the energy in the Milky Way Galaxy (no joke!). However, know the key, and decryption is easy. Worse, early implementations of AES took shortcuts. They actually left the signature of the path they took through the encryption in their accesses to memory. The caches on the CPU were like the reeds from the old fable. Merely observing how long it took to read data was sufficient to gather those reeds, make a flute, and play a song that unveils the encryption key (which is clearly either "The king has horns" or 1-2-3-4-5, depending on how secure you think your luggage combination is). Observing the true inner self of the AES encryption implementations was enough to completely dismantle them. Of course, not every implementation fell victim to this. You had to know the name of the implementation to determine which vulnerabilities it had, and how to strike at them.

Or, more literally, consider the work of Alfred Whitehead, Principia Mathematica. Principia Mathematica was to be a proof that you could prove all of the truths in arithmetic using purely procedural means. In Principia Mathematica, there was no manipulation based on semantics; everything he did was based on syntax, manipulating the actual symbols on the paper. Gödel's Incompleteness Theorem caught Principia Mathematica by the tail, proving that its own rules were sufficient to demonstrate that it could never accomplish its goals. Principia Mathematica went down as the greatest Tower of Babel of modern mathematical history. Whitehead is no longer remembered for his mathematical work. He actually left the field of mathematics shortly afterwards, and became a philosopher and peace advocate, making a new name for himself there. (by Cort Ammon)

27 October 2015

Martín Ferrari: Tales from the SRE trenches: Dev vs Ops

This is the second part in a series of articles about SRE, based on the talk I gave at the Romanian Association for Better Software. In the first part, I briefly introduced what SRE is. Today, I present some concrete ways in which SRE tried to make things better, by stopping the war between developers and SysAdmins.

Dev vs Ops: the eternal battle
So, it starts by looking at the problem: how do we increase the reliability of the service? It turns out that some of the biggest sources of outages are new launches: a new feature that seemed innocuous somehow managed to bring the whole site down. Devs want to launch, and Ops want to have a quiet weekend, and this is where the struggle begins. When launches are problematic, bureaucracy is put in place to minimise the risks: launch reviews, checklists, long-lived canaries. This is followed by development teams finding ways of side-stepping those hurdles. Nobody is happy. One of the key aspects of SRE is to avoid this conflict completely, by changing the incentives, so these pressures between development and operations disappear. At Google, they achieve this with a few different strategies:

Have an SLA for your service
Before any service can be supported by SRE, it has to be determined what is the level of availability that it must achieve to make the users and the company happy: this is called the Service Level Agreement (SLA). The SLA will define how availability is measured (for example, percentage of queries handled successfully in less than 50ms during the last quarter), and what is the minimum acceptable value for it (the Service Level Objective, or SLO). Note that this is a product decision, not a technical one. This number is very important for an SRE team and its relationship with the developers. It is not taken lightly, and it should be measured and enforced rigorously (more on that later). Only a few things on earth really require 100% availability (pacemakers, for example), and achieving really high availability is very costly. Most of us are dealing with more mundane affairs, and in the case of websites, there are many other things that fail pretty often: network glitches, OS freezes, browsers being slow, etc. So an SLO of 100% is almost never a good idea, and in most cases it is impossible to reach. In places like Google an SLO of "five nines" (99.999%) is not uncommon, and this means that the service can't fail completely for more than 5 minutes across a whole year!
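To make that "five nines" arithmetic concrete, here is a tiny sketch of my own (not from the talk) that turns an SLO into the total downtime it allows over a year:

// slo_budget.cpp -- illustrative only: downtime allowed per year by an SLO.
#include <iostream>

int main() {
    const double slo = 0.99999;                       // "five nines"
    const double minutes_per_year = 365.0 * 24 * 60;  // ignoring leap years
    const double allowed_downtime = (1.0 - slo) * minutes_per_year;
    std::cout << "Allowed downtime per year: "
              << allowed_downtime << " minutes\n";    // roughly 5.3 minutes
    return 0;
}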

Measure and report performance against SLA/SLO
Once you have a defined SLA and SLO, it is very important that these are monitored accurately and reported constantly. If you wait for the end of the quarter to produce a hand-made report, the SLA is useless, as you only know you broke it when it is too late. You need automated and accurate monitoring of your service level, and this means that the SLA has to be concrete and actionable. Fuzzy requirements that can't be measured are just a waste of time. This is a very important tool for SRE, as it allows you to see the progression of the service over time, detect capacity issues before they become outages, and at the same time show how much downtime can be taken without breaking the SLA. Which brings us to one core aspect of SRE:

Use error budgets and gate launches on them
If SLO is the minimum rate of availability, then the result of calculating 1 - SLO is what fraction of the time a service can fail without falling out of the SLA. This is called an error budget, and you get to use it the way you want it. If the service is flaky (e.g. it fails consistently 1 of every 10000 requests), most of that budget is just wasted and you won't have any margin for launching riskier changes. On the other hand, a stable service that does not eat the budget away gives you the chance to bet part of it on releasing more often, and getting your new features quicker to the user. The moment the error budget is spent, no more launches are allowed until the average goes back out of the red. Once everyone can see how the service is performing against this agreed contract, many of the traditional sources of conflict between development and operations just disappear. If the service is working as intended, then SRE does not need to interfere on new feature launches: SRE trusts the developers' judgement. Instead of stopping a launch because it seems risky or under-tested, there are hard numbers that take the decisions for you. Traditionally, Devs get frustrated when they want to release, but Ops won't accept it. Ops thinks there will be problems, but it is difficult to back this feeling with hard data. This fuels resentment and distrust, and management is never pleased. Using error budgets based on already established SLAs means there is nobody to get upset at: SRE does not need to play bad cop, and SWE is free to innovate as much as they want, as long as things don't break. At the same time, this provides a strong incentive for developers to avoid risking their budget in poorly-prepared launches, to perform staged deployments, and to make sure the error budget is not wasted by recurrent issues.
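As an illustration of the gating idea, here is a sketch under my own assumptions (real tooling is of course more sophisticated, and availability is usually tracked per SLA window rather than over all time): the launch decision reduces to comparing the measured failure rate against the budget of 1 - SLO.

// error_budget.cpp -- illustrative sketch of gating launches on an error budget.
#include <cstdint>
#include <iostream>

// Launches are allowed while the observed failure rate stays within 1 - SLO.
bool launches_allowed(std::uint64_t failed, std::uint64_t total, double slo) {
    if (total == 0) return true;  // no traffic measured, no budget spent
    const double failure_rate = static_cast<double>(failed) / total;
    return failure_rate <= (1.0 - slo);
}

int main() {
    // Example: 120 failed requests out of 10 million, against a 99.999% SLO.
    // The budget allows only 100 failures at that volume, so launches stop.
    std::cout << std::boolalpha
              << launches_allowed(120, 10000000, 0.99999) << '\n';  // false
    return 0;
}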
That's all for today. The next article will continue delving into how the traditional tensions between Devs and Ops play out in the SRE world.

24 October 2015

Martín Ferrari: Tales from the SRE trenches - Part 1

A few weeks ago, I was offered the opportunity to give a guest talk in the Romanian Association for Better Software. RABS is a group of people interested in improving the trade, and regularly hold events where invited speakers give presentations on a wide array of topics. The speakers are usually pretty high-profile, so this is quite a responsibility! To make things more interesting, much of the target audience works on enterprise software, under Windows platforms. Definitely outside my comfort zone! Considering all this, we decided the topic was going to be about Site Reliability Engineering (SRE), concentrating on some aspects of it which I believe could be useful independently of the kind of company you are working for. I finally gave the talk last Monday, and the audience seemed to enjoy it, so I am going to post here my notes, hopefully some other people will like it too.

Why should I care?
I prepared this thinking of an audience of software engineers, so why would anyone want to hear about this idea that only seems to be about making the life of the operations people better? The thing is, having your work as a development team supported by an SRE team will also benefit you. This is not about empowering Ops to hit you harder when things blow apart, but about having a team that is your partner. A partner that will help you grow, handle the complexities of a production environment so you can concentrate on cool features, and that will get out of the way when things are running fine. A development team may seem to only care about adding features that will drive more and more users to your service. But an unreliable service is a service that loses users, so you should care about reliability. And what better way than to have a team with Reliability in their name?

What is SRE?
SRE means Site Reliability Engineering: Reliability Engineering applied to "sites". Wikipedia defines Reliability Engineering as:
[..] engineering that emphasizes dependability in the life-cycle management of a product.
This is, historically, a branch of engineering that made it possible to build devices that will work as expected even when their components were inherently unreliable. It focused on improving component reliability, establishing minimum requirements and expectations, and a heavy usage of statistics to predict failures and understand underlying problems. SRE started as a concept at Google about 12 years ago, when Ben Treynor joined the company and created the SRE team from a group of 7 production engineers. There is no good definition of what Site Reliability Engineering means; while the term and some of its ideas are clearly inspired by the more traditional RE, he defines SRE with these words [1]:
Fundamentally, it's what happens when you ask a software engineer to design an operations function.

Only hire coders
After reading that quote it is not surprising that the first item in the SRE checklist [2] is to only hire people who can code properly for SRE roles. Writing software is a key part of being an SRE. But this does not mean that there is no separation between development and operations, nor that SRE is a fancy(er) name for DevOps [3]. It means treating operations as a software engineering problem, using software to solve problems that used to be solved by hand, implementing rigorous testing and code reviewing, and taking decisions based on data, not just hunches. It also implies that SREs can understand the product they are supporting, and that there is a common ground and respect between SREs and software engineers (SWEs). There are many things that make SRE what it is, some of which only make sense within a special kind of company like Google: many different development and operations teams, service growth that can't be matched by hiring, and more importantly, firm commitment from top management to implement these drastic rules. Therefore, my focus here is not to preach on how everybody should adopt SRE, but to extract some of the most useful ideas that can be applied in a wider array of situations. Nevertheless, I will first try to give an overview of how SRE works at Google.
That's it for today. In the next post I will talk about how to end the war between developers and SysAdmins. Stay tuned!

  1. http://www.site-reliability-engineering.info/2014/04/what-is-site-reliability-engineering.html
  2. SRE checklist extracted from Treynor's talk at SREcon14: https://www.usenix.org/conference/srecon14/technical-sessions/presentation/keys-sre
  3. By the way, I am still not sure what DevOps mean, it seems that everyone has a different definition for it.

8 May 2015

Gunnar Wolf: Guests in the Classroom: Felipe Esquivel (@felipeer) on the applications of parallelism, focusing on 3D animation

I love having guests give my classes :) This time, we had Felipe Esquivel, a good friend who had been invited by me to the Faculty once before, about two years ago. And it was due time to invite him again! Yes, this is the same Felipe I recently blogged about. To give my blog some credibility, you can refer to Felipe's entry in IMDb and, of course, to the Indiegogo campaign page for Natura. Felipe knows his way around the different aspects of animation. For this class (2015-04-15), he explained how traditional ray-tracing techniques work, and showed clear evidence of the promises and limits of parallelism. Relating back to my subject and to academic rigor, he clearly showed the speed with which we run into Amdahl's Law, which limits the efficiency of parallelization at a certain degree per program construct, counterpointed against Gustafson's law, where our problem will be able to be solved in better detail given more processing abilities (and will thus not hit Amdahl's hard ceiling). A nice and entertaining talk. But I know you are looking for the videos! Get them, either at my server or at archive.org.
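For readers who have not run into the two laws Felipe contrasted, here is a small numeric sketch of my own (not material from the talk): Amdahl's Law bounds the speedup of a fixed-size problem by its serial fraction, while Gustafson's Law describes the scaled speedup when the problem is allowed to grow with the number of processors.

// speedup.cpp -- illustrative comparison of Amdahl's and Gustafson's laws.
#include <iostream>

// Amdahl: speedup for a fixed workload with serial fraction s on n processors.
double amdahl(double s, int n) { return 1.0 / (s + (1.0 - s) / n); }

// Gustafson: scaled speedup when the parallel part grows with n.
double gustafson(double s, int n) { return n - s * (n - 1); }

int main() {
    const double serial_fraction = 0.05;  // 5% of the work cannot be parallelised
    for (int n : {8, 64, 1024}) {
        std::cout << n << " processors: Amdahl " << amdahl(serial_fraction, n)
                  << "x, Gustafson " << gustafson(serial_fraction, n) << "x\n";
    }
    return 0;
}

With a 5% serial fraction, Amdahl's speedup never exceeds 20x no matter how many processors you add (the hard ceiling mentioned above), while Gustafson's scaled speedup keeps growing with the problem size.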

6 January 2015

Tiago Bortoletto Vaz: A few excerpts from The Invisible Committee's latest article

Just sharing some points from "2. War against all things smart!" and "4. Techniques against Technology" by The Invisible Committee's "Fuck off Google" article. You may want to get the "Fuck off Google" pdf and watch that recent talk at 31C3. "...predicts The New Digital Age, there will be people who resist adopting and using technology, people who want nothing to do with virtual profiles, online data systems or smart phones. Yet a government might suspect that people who opt out completely have something to hide and thus are more likely to break laws, and as a counterterrorism measure, that government will build the kind of hidden people registry we described earlier. If you don't have any registered social-networking profiles or mobile subscriptions, and on-line references to you are unusually hard to find, you might be considered a candidate for such a registry. You might also be subjected to a strict set of new regulations that includes rigorous airport screening or even travel restrictions." I was introduced to the following observations about 5 years ago when reading "The Immaterial" by André Gorz. Now The Invisible Committee makes that even clearer in a very few words: "Technophilia and technophobia form a diabolical pair joined together by a central untruth: that such a thing as the technical exists. [...] Techniques can't be reduced to a collection of equivalent instruments any one of which Man, that generic being, could take up and use without his essence being affected." "[...] In this sense capitalism is essentially technological; it is the profitable organization of the most productive techniques into a system. Its cardinal figure is not the economist but the engineer. The engineer is the specialist in techniques and thus the chief expropriator of them, one who doesn't let himself be affected by any of them, and spreads his own absence from the world everywhere he can. He's a sad and servile figure. The solidarity between capitalism and socialism is confirmed there: in the cult of the engineer. It was engineers who drew up most of the models of the neoclassical economy like pieces of contemporary trading software."

14 November 2014

Debian Med: Bits from Debian Med team (by Andreas Tille)

New set of metapackages
The version number of debian-med metapackages was bumped to 1.99 as a signal that we plan to release version 2.0 with Jessie. As usual the metapackages will be recreated shortly before the final release to include potential changes in the package pool. Feel free to install the metapackages med-* with the package installer of your choice. As always you can have a look at the packages in our focus by visiting our tasks pages. Please note that there may be new packages that aren't ready for release and that won't be installed by using the current metapackages. This is because we don't stop packaging software when the current testing is in freeze.
Some support for Hospital Information Systems
This release contains, for the first time, some support for Hospital Information Systems (HIS) with the dependency fis-gtm of the med-his metapackage. This was made possible due to the work of Luis Ibanez (at kitware at the time when working on the packaging) and Amul Shah (fisglobal). Thanks to a fruitful cooperation between upstream FIS and Debian, the build system of fis-gtm was adapted to enable easier packaging. The availability of fis-gtm will simplify running Vista-foia on Debian systems and we are finally working on packaging Vista as well to make Debian fit for running inside hospitals. There was some interesting work done by Emilien Klein, who was working hard to get GNUHealth packaged. Emilien has given a detailed explanation on the Debian Med mailing list giving reasons why he removed the existing packages from the Debian package pool again. While this is a shame for GNUHealth users, there might be an opportunity to revive this effort if there was better coordination between upstream and Tryton (which is the framework GNUHealth is based upon). In any case the packaging code in SVN is a useful resource to base private packages on. Feel free to contact us via the Debian Med mailing list if you consider creating GNUHealth Debian packages.
Packages moved from non-free to main
The Debian Med team worked hard to finally enable DFSG-free licenses for PHYLIP and other packages based on this tool. PHYLIP is well known in bioinformatics and actually one of the first packages in this field inside Debian (oldest changelog entry 28 Aug 1998). Since then it was considered non-free because its use was restricted to scientific / non-commercial use and also had the condition that you need to pay a fee to the University of Washington if you intend to use it commercially. Since Debian Med was started we were in continuous discussion with the author Joe Felsenstein. We even started an online petition to show how large the interest in a DFSG-free PHYLIP might be. As a side note: this petition was *not* presented to the authors, since they happily decided to move to a free license because of previous discussion and since they realised that the money they "gained" over the years was only minimal. The petition is mentioned here to demonstrate that it is possible to gather support to see positive changes implemented that benefit all users, and that this approach can be used for similar cases. So finally PHYLIP was released in September under a BSD-2-clause license, and in turn SeaView (a similarly famous program and also a long-term non-free citizen) depending on PHYLIP code was freed as well. There are several other tools like python-biopython and python-cogent which are calling PHYLIP if it exists.
So not only is PHYLIP freed, we can now stop removing those parts of the test suites of these other tools that are using PHYLIP. Thanks to all who participated in freeing PHYLIP, specifically its author Joe Felsenstein.
Autopkgtest in Debian Med packages
We tried hard to add autopkgtests to all packages where some upstream test suite exists and we also tried to create some tests on our own. Since we consider testing of scientific software a very important feature, this work was highly focused on for the Jessie release. When doing so we were able to drastically enhance the reliability of packages and found new, formerly hidden dependency relations. Perhaps the hardest work was to run the full test suite of python-biopython, which also has uncovered some hidden bugs in the upstream code on architectures that are not so frequently used in the field of bioinformatics. This was made possible by the very good support of upstream, who were very helpful in solving the issues we reported. However, we are not at 100% coverage of autopkgtest and we will keep on working on our packages in the next release cycle for Jessie+1.
General quality assurance
A general inspection of all Debian Med packages was done to check all packages which were uploaded before the Wheezy release and never touched since then. Those packages were checked for changed upstream locations which might have been hidden from uscan, and in some cases new upstream releases were spotted by doing this investigation. Other old packages were re-uploaded conforming to current policy and packaging tools, also polishing lintian issues.
Publication with Debian Med involvement
The Debian Med team is involved in a paper which is in BioMed Central (in press). The title will be "Community-driven development for computational biology at Sprints, Hackathons and Codefests".
Updated team metrics
The team metrics graphs on the Debian Med Blend entry page were updated. At the bottom you will find a 3D bar chart of dependencies of selected metapackages over different versions. It shows our continuous work in several fields. Thanks to all Debian Med team members for their rigorous work on our common goal to make Debian the best operating system for medicine and biology. Please note that VCS stat calculation is currently broken and does not reflect the latest commits this year.
Blends installable via d-i?
In bug #758116 it is requested to list all Blends, and thus also Debian Med, in the initial tasksel selection. This would solve a long-term open issue which was addressed more than eleven years ago (in #186085) in a more general and better way. This would add a frequently requested feature for our users, who always wonder how to install Debian Med. While there is no final decision on bug #758116 and we are quite late with the request to get this implemented in Jessie, feel free to contribute ideas so that this selection of Blends can be done in the best possible manner.
Debian Med Bug Squashing Advent Calendar 2014
The Debian Med team will again do the Bug Squashing Advent Calendar. Feel free to join us in our bug squashing effort where we close bugs while other people are opening doors. :-)

30 October 2014

Matthew Garrett: Hacker News metrics (first rough approach)

I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative (meritocracy, technology will fix everything, regulation is the cancer killing agile startups) and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:
  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)
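Expressed as code, the pairwise test amounts to something like the following rough Haskell sketch (the Story type and its field names are mine, purely for illustration; they are not the format of the scraped data):

data Story = Story
  { age         :: Int   -- hours since submission
  , score       :: Int   -- points at snapshot time
  , rank        :: Int   -- position in the listing (1 = top)
  , onFrontPage :: Bool
  }

-- True if story a looks penalised relative to story b: a is younger,
-- has a higher score, yet ranks worse, and at least one of the two is
-- on the front page.
penalisedAgainst :: Story -> Story -> Bool
penalisedAgainst a b =
     age a < age b
  && score a > score b
  && rank a > rank b
  && (onFrontPage a || onFrontPage b)

-- A story is flagged if the test holds against any other story in the
-- same hourly snapshot (comparing a story against itself is harmlessly
-- False, since a story is never younger than itself).
penalised :: [Story] -> Story -> Bool
penalised snapshot s = any (penalisedAgainst s) snapshot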

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:
Keyword    Penalised  Unpenalised
Women      13         4
Harass     2          0
Female     5          1
Intel      2          3
x86        3          4
ARM        3          4
Airplane   1          2
Startup    46         26

A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?


30 October 2013

Joey Hess: license monads

This could have been a blog post about toothbrushes for monkeys, which I seem to have dreamed about this morning. But then I was vaguely listening to the FAIFCast in the car, and I came up with something even more esoteric to write about! License monads would allow separating parts of code that are under different licenses, in a rigorous fashion. For example, we might have two functions in different license monads:
foo :: String -> GPL String
bar :: Char -> BSD Char
Perhaps foo needs to use bar to calculate its value. It can only do so if there's a way to lift code in the BSD monad into the GPL monad. Which we can legally write, since the BSD license is upwards-compatible with the GPL:
liftGPL :: BSD a -> GPL a
On the other hand, there should be no way provided to lift the GPL monad into the BSD monad. So bar cannot be written using code from foo, which would violate foo's GPL license. Perhaps the reason I am thinking about this is that the other day I found myself refactoring some code out of the git-annex webapp (which is AGPL licensed) and into the git-annex assistant (which is GPL licensed). Which meant I had to relicense that code. Luckily that was easy to do, legally speaking, since I am the only author of the git-annex webapp so far, and own the whole license of it. (Actually, it's only around 3 thousand lines of code, and another thousand of html.) It also turned out to be easy to do the refactoring, technically speaking, because looking at the code I realized I had accidentally written it in the wrong monad; all the functions were in the webapp's Handler monad, but all of them used liftAnnex to actually do their work in the Annex monad. If that had not been the case, I would not have been able to refactor the code, at least not without entirely rewriting it. It's as if I had accidentally written:
foo :: String -> GPL String
foo = mapM (liftGPL . bar)
Which can be generalized to:
foo :: String -> BSD String
foo = mapM bar
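To make this a bit more concrete, the license monads themselves could be ordinary newtype wrappers, with only the upward lift exported from the defining module. A rough sketch (not code I actually use anywhere):

newtype GPL a = GPL a
newtype BSD a = BSD a

instance Functor GPL where fmap f (GPL a) = GPL (f a)
instance Applicative GPL where
  pure = GPL
  GPL f <*> GPL a = GPL (f a)
instance Monad GPL where
  GPL a >>= f = f a

instance Functor BSD where fmap f (BSD a) = BSD (f a)
instance Applicative BSD where
  pure = BSD
  BSD f <*> BSD a = BSD (f a)
instance Monad BSD where
  BSD a >>= f = f a

-- BSD code can be lifted into GPL code, but no liftBSD :: GPL a -> BSD a
-- is exported (and the constructors are kept abstract), so the compiler
-- enforces the one-way license compatibility.
liftGPL :: BSD a -> GPL a
liftGPL (BSD a) = GPL a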
I don't think that license monads can be realistically used in the current world, because lawyers and math often don't mix well. Lawyers have, after all, in the past written laws re-defining pi to be 3. Still, license monads are an interesting way to think about things for myself. They capture how my code and licenses are structured and allow me to reason about it on a more granular level than the licences of individual files. (I have a vague feeling someone else may have written about this idea before? Perhaps I should have been blogging about monkey toothbrushes after all...)

26 October 2013

Russ Allbery: Review: Why We Fail

Review: Why We Fail, by Victor Lombardi
Publisher: Rosenfeld Media
Copyright: 2013
ISBN: 1-933820-17-9
Format: Kindle
Pages: 248
Subtitled Learning from Experience Design Failures, Why We Fail is a case study book. It defines experience design, discusses the usefulness of analyzing failure, and then walks through ten marketplace failures attributed to failures of user experience (as opposed to engineering or user interface) and tries to draw some conclusions about those failures and how they could have been prevented. Lombardi then presents a more general (if fairly simple) framework for iterative experience design, along with some discussion of how that approach could have helped in these cases. Hardware and software designs can work properly but still fail with users and customers because the overall experience isn't right. Sometimes this is because something else got there first with more market share and people got used to it, sometimes it's because a system is too complicated, sometimes it's too simple, sometimes it's just uninspiring. Most of the news media coverage and many of the blog posts on this topic look to inspirational examples of successes. Lombardi points out here that successes can be accidental and failures are often more informative, a concept already common in many more mature areas of engineering. His hope, with Why We Fail, is to push a broader industry practice of taking apart and analyzing the failures. User experience design is still in its infancy, since many of the complex interactions now possible in modern hardware and software have just recently crossed into levels of complexity that qualify as experiences rather than interactions, and could benefit from more rigor. The concept sounds great. The approach in Why We Fail is at least interesting; Lombardi tells good stories, and I think they do tell some useful lessons. Drawing general principles is harder, which is always the weakness of case-study books. Lombardi's own attempt doesn't go far beyond a familiar mantra of the scientific method, small and responsive teams, cross-functional teams with flexibility to address a whole range of technical and business issues, and honest and forthright analysis of and iteration after failure. All of this is doubtless good advice; none of it should surprise anyone with even a smattering of engineering background or familiarity with current software design philosophy. The heart of the book is in the stories, and I did like that Lombardi captured a wide range of different designs. Included are embedded systems (BMW's iDrive), traditional software packages, web services of various types, and even a protocol (OAuth). Some were vaguely familiar and some were not familiar at all, although I don't follow UI and UX discussions to any large extent. In each case, Lombardi gives a short but effective introduction to the problem space and the product, walks through the design choices, and then talks about the failure. Usefully, he doesn't stop with the failure but continues the story through the reaction of the company to the failure and any subsequent actions they took, which in some cases contain their own useful lessons. All of this, I should note, is a bit far afield of my own area of expertise (architectural building blocks and sub-surface components without as much direct interaction with the user), so I'm not the best person to judge the quality and originality of the analysis. But I liked Lombardi's approach and his analysis of multiple possible failure causes, and all of his conclusions seemed quite plausible and sensible given the case studies as described. 
I'm best-qualified to judge the chapter on OAuth, since I've worked extensively on web authentication systems, and his analysis of the challenges and user experiences in that case closely matches my own. I didn't notice any significant errors, which is hopefully a good sign for the rest of the book. As one might expect from a short book about a complex field aimed mostly at starting a conversation, there isn't much here that's earth-shattering, nor are there simple extractable principles that will make your experience designs better. As always, the takeaway boils down to "invest time and resources into this and try to think about it systematically," which of course is true of every possible axis of improvement for any product. (The challenge is usually where to spend those limited resources.) I will also note in passing that Lombardi assumes an entirely market-driven metric for success, and all of the case studies are exclusively commercial. That's not unexpected, since that's where most of the money and a lot of the resources are, but it's not that helpful for bespoke work where the nature of the problem is subtly different. But, despite the expected shortcomings and the inherent weakness of the case study approach to analysis (which tries to pluralize anecdote to get data), I enjoyed this book. I also got at least one useful trick out of it: the premortem, to look for potential failure points in the experience before rolling out, or even designing, the product. It's a fairly light-weight book, but I think it's moderately inspirational, and I wholeheartedly concur with Lombardi on the merits of failure analysis as an engineering approach. Rating: 7 out of 10

14 October 2013

Charles Plessy: Update of EMBOSS explorer in Wheezy.

EMBOSS explorer was broken in Debian 7 (Wheezy) because of an incompatibility with EMBOSS 6.4. The package was repaired with the second update (7.2). Development and maintenance of EMBOSS explorer stopped many years ago. If a new serious bug surfaces, we may need to remove the package rather than repair it. Consequently, do not hesitate to suggest an alternative to us, or, if you are a developer and need EMBOSS explorer, to see how you can reinvigorate the project (currently hosted on SourceForge).

29 September 2013

Dirk Eddelbuettel: Rcpp 0.10.5

A new version of Rcpp is now on the CRAN network for GNU R; binaries for Debian have been uploaded as well. Once more, this release brings a large number of exciting changes to Rcpp. Some concern usability, some bring new features, some increase performance; see below for the detailed list. We have now released three updates on a quarterly cycle; if we keep this up, the next version ought to be ready at the end of December. As in the past, we tested the release rather rigorously by checking against all packages I could (relatively easily) build on my server: this time it successfully passed R CMD check for all 107 packages I can build locally out of a total of 136 packages. (Two failed: one for an error in Makevars, and one for the need of an X11 server during tests; this may get addressed in the test script next time.) As all of these 107 packages passed, we do not expect any issues with dependent packages. Should there be issues, we would appreciate a note, preferably with reproducible code, to the rcpp-devel mailing list. The complete NEWS entry for 0.10.5 is below; more details are in the ChangeLog file in the package and on the Rcpp Changelog page.
Changes in Rcpp version 0.10.5 (2013-09-28)
  • Changes in R code:
    • New R function demangle that calls the DEMANGLE macro.
    • New R function sizeof to query the byte size of a type. This returns an object of S3 class bytes that has a print method showing bytes and bits.
  • Changes in Rcpp API:
    • Add defined(__sun) to lists of operating systems to test for when checking for lack of backtrace() needed for stack traces.
    • as<T*>, as<const T*>, as<T&> and as<const T&> are now supported, when T is a class exposed by modules, i.e. with RCPP_EXPOSED_CLASS
    • DoubleVector has been added as an alias to NumericVector
    • New template function is<T> to identify if an R object can be seen as a T. For example is<DataFrame>(x). This is a building block for more expressive dispatch in various places (modules and attributes functions).
    • wrap can now handle more types, i.e. types that iterate over std::pair<const KEY, VALUE> where KEY can be converted to a String and VALUE is either a primitive type (int, double) or a type that wraps. Examples:
      • std::map<int, double>: we can make a String from an int, and double is primitive
      • boost::unordered_map<double, std::vector<double> >: we can make a String from a double and std::vector<double> can wrap itself
      Other examples of this are included at the end of the wrap unit test file (runit.wrap.R and wrap.cpp).
    • wrap now handles containers of classes handled by modules. e.g. if you expose a class Foo via modules, then you can wrap vector<Foo>, ... An example is included in the wrap unit test file.
    • RcppLdFlags(), often used in Makevars files of packages using Rcpp, is now exported from the package namespace.
  • Changes in Attributes:
    • Objects exported by a module (i.e. by a RCPP_MODULE call in a file that is processed by sourceCpp) are now directly available in the environment. We used to make the module object available, which was less useful.
    • A plugin for openmp has been added to support use of OpenMP.
    • Rcpp::export now takes advantage of the more flexible as<>, handling constness and referenceness of the input types. For users, it means that for the parameters of functions exported by modules, we can now use references, pointers and const versions of them. The Module.cpp file has an example.
    • No longer call non-exported functions from the tools package
    • No longer search the inline package as a fallback when loading plugins for the Rcpp::plugins attribute.
  • Changes in Modules:
    • We can now expose functions and methods that take T& or const T& as arguments. In these situations objects are no longer copied as they used to be.
  • Changes in sugar:
    • is_na supports classes DatetimeVector and DateVector
  • Changes in Rcpp documentation:
    • The vignettes have been moved from inst/doc/ to the vignettes directory which is now preferred.
    • The appearance of the vignettes has been refreshed by switching to the Bitstream Charter font and the microtype package.
  • Deprecation of RCPP_FUNCTION_*:
    • The macros from the preprocessor_generated.h file have been deprecated. They are still available, but they print a message in addition to their expected behavior.
    • The macros will be permanently removed in the first Rcpp release after July 2014.
    • Users of these macros should start replacing them with more up-to-date code, such as using 'Rcpp attributes' or 'Rcpp modules'.
Thanks to CRANberries, you can also look at a diff to the previous release 0.10.4. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

Joachim Breitner: Heidelberg Laureates Forum 2013

During the last week I was attending the first Heidelberg Laureates Forum as one of the lucky 200 accepted young scientists. The HLF is a (from now on hopefully yearly) event that brings together Fields Medalists, Abel Prize laureates and Turing Award winners with young scientists (undergraduates, Ph.D. students and postdocs) from both fields in the city of Heidelberg. The extremely well organized week consisted of lectures from the laureates, some workshops held by postdocs, excursions and plenty of good food. Videos of the lectures are available (but don't work on Linux, at least not for me), and I have shot a few pictures of the event as well. I believe that my favourite lectures were Michael Atiyah's "Advice to a Young Mathematician", Vladimir Voevodsky's "Univalent Foundations of Mathematics", William Morton Kahan's "Desperately Needed Remedies for the Undebuggability of Large-Scale Floating-Point Computations in Science and Engineering" and Alan Kay's "Putting Turing to Work".

Where are all the functional programmers?

During that event, one gets to talk to many other math and computer science researchers; sometimes just "Where are you from?" and "What do you do?", sometimes long discussions. Unfortunately, I hardly found anyone who is into functional programming language research. Is that only because the event was parallel to ICFP (which I really would have liked to attend as well), or is functional programming really just about 1% of all computer science?

What is a proof?

My other research interest lies in interactive theorem proving, especially using Isabelle. Of course that is a topic one can discuss with almost everyone at such an event, including the mathematicians. The reactions were rather mixed: on the one end of the spectrum, some mathematicians seriously doubt that they would ever trust a computer to check proofs, or that it would ever be efficient enough to use. Others would not mind having a button that tells them whether their paper written in LaTeX is correct, but were not keen to invest time or thought into making the proof readable by the computer. And then there were some (but very few!) who had not heard of theorem proving before and were very excited by the prospect of being able to obtain certainty about their proofs immediately and without having to bother other scientists with it.

During the mathematicians' panel discussion, where I posed the question "Do you see value in, or even a need for, machine-checked proofs in mathematics?", Efim Zelmanov (Fields Medal 1994) said that "a proof is what other mathematicians see as a proof". I found this attitude a bit surprising: for me, a proof has always been a rigorous derivation within a formal system (say, ZFC set theory), and what we write in papers is a (less formal) description of the actual proof, whose existence we believe in. Therefore I was very happy to see Vladimir Voevodsky give a very committed talk about Univalent Foundations and how using that as the language for mathematics will allow more of mathematics to be cast in a formal, machine-checked form. I got the chance to discuss this with him in person, as I wanted to hear his opinion on Isabelle, and especially on the usefulness of the style of structured proof that Isar provides, which is closer to the style of proofs that mathematicians use in papers. He said that he enjoyed writing his proofs in the style required in Type Theory and in Coq, and that maybe mathematicians should and will adjust to the language of the system, while I believe that a structured proof language like Isar, independent of the underlying logic (HOL in this case, which is insufficient to form a base for all of abstract mathematics), is a very useful feature, and that proof assistants should adjust to the mathematicians.
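To give a flavour of what such a machine-checked proof looks like, here is commutativity of addition on the natural numbers, proved by induction; the snippet below is written in Lean rather than Isabelle/Isar, purely as an illustration of the genre:

-- Commutativity of addition on the natural numbers, proved by induction
-- on the second argument; short, fully machine-checked, and not quite trivial.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ b ih =>
    show a + Nat.succ b = Nat.succ b + a
    rw [Nat.add_succ, Nat.succ_add, ih]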
We also briefly discussed my idea of working with theorem provers with motivated students already in high school, e.g. in math clubs, and found that simple proofs about arithmetic of natural numbers could be feasible there, without being too trivial. All in all it was a very rewarding and special week, and I can only recommend trying to attend one of the next forums, if possible.

25 June 2013

Dirk Eddelbuettel: Rcpp 0.10.4

A new version of Rcpp is now on the CRAN network for GNU R; binaries for Debian have been uploaded as well. This release brings a fairly large number of fixes and improvements across a number of Rcpp features; see below for the detailed list. We are also announcing with this release that we plan to phase out the RCPP_FUNCTION_* macros. Not only have they been superseded by Rcpp Modules and Rcpp Attributes (each of which has its own pdf vignette in the Rcpp package), but they also appear to be at best lightly used. We are, for example, not aware of any CRAN packages deploying them. To provide a smooth transition, we are aiming to keep them around for another twelve months, but plan to remove them with the first release after that time window has passed. As before, we tested the release rather rigorously by checking against all packages I could (relatively easily) build on my server: this time it covered 91 of the 124 CRAN packages depending on Rcpp. As all of these 91 packages passed their checks, we do not expect any issues with dependent packages. The complete NEWS entry for 0.10.4 is below; more details are in the ChangeLog file in the package and on the Rcpp Changelog page.
Changes in Rcpp version 0.10.4 (2013-06-23)
  • Changes in R code: None beyond those detailed for Rcpp Attributes
  • Changes in Rcpp attributes:
    • Fixed problem whereby the interaction between the gc and the RNGScope destructor could cause a crash.
    • Don't include package header file in generated C++ interface header files.
    • Lookup plugins in inline package if they aren't found within the Rcpp package.
    • Disallow compilation for files that don't have extensions supported by R CMD SHLIB
  • Changes in Rcpp API:
    • The DataFrame::create set of functions has been reworked to just use List::create and feed to the DataFrame constructor
    • The operator-() semantics for Date and Datetime are now more in line with standard C++ behaviour; with thanks to Robin Girard for the report.
    • RNGScope counter now uses unsigned long rather than int.
    • Vector<*>::erase(iterator, iterator) was fixed. Now it does not remove the element pointed to by last (similar to what is done on STL types and what was intended initially). Reported on Rcpp-devel by Toni Giorgino.
    • Added equality operator between elements of CharacterVectors.
  • Changes in Rcpp sugar:
  • Changes in Rcpp build tools:
    • Fix by Martyn Plummer for Solaris in handling of SingleLogicalResult.
    • The src/Makevars file can now optionally override the path for /usr/bin/install_name_tool which is used on OS X.
    • Vignettes are trying harder not to be built in parallel.
  • Changes in Rcpp documentation:
    • Updated the bibliography in Rcpp.bib (which is also sourced by packages using Rcpp).
    • Updated the THANKS file.
  • Planned Deprecation of RCPP_FUNCTION_*:
    • The set of macros RCPP_FUNCTION_ etc ... from the preprocessor_generated.h file will be deprecated in the next version of Rcpp, i.e. they will still be available but will generate a warning in addition to their expected behavior.
    • In the first release that is at least 12 months after this announcement, the macros will be removed from Rcpp.
    • Users of these macros (if there are any) should start replacing them with more up to date code, such as using Rcpp attributes or Rcpp modules.
Thanks to CRANberries, you can also look at a diff to the previous release 0.10.3. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

1 June 2013

Daniel Pocock: Democracy down-under is not democracy

This weekend we have the AGM of a local community organisation. Like many organisations I am involved in, there will be a democratic process of decision making, a board election and a low-budget social activity afterwards. We take it for granted that democracy is a good thing. In my own country, Australia, people are supposedly happier than they've ever been, and no doubt people will suggest that our democracy is part of the recipe for success. While this example is from Australia, it could well happen anywhere. While everybody was slapping themselves on the back about our officially confirmed contentment, the politicians tried to slip something under the radar. With elections expected in September, the press exposed a secret deal between the two biggest political parties, raiding the public piggy bank for $60 million to prop up their campaign accounts. There is even a leaked copy of a letter confirming the deal. It was to be voted through parliament in a stitch-up within 48 hours of the "happiness" announcement. Why would these two big political parties engage in such a gross conspiracy? Weren't they already content with their whopping big pay increases that put our Prime Minister on a bigger salary than the US president or the UK Prime Minister? Well, you don't have to look hard to find out what this special funding was all about: not long ago, the post office at the University of Melbourne where Wikileaks operates a post-office box was mysteriously shut down. While that may seem like it could just be a coincidence on its own, it's worth considering the wider context: the Wikileaks party is one of the most widely recognised names in Australian politics right now. The party's leader, like the new pope, is seen as somebody who puts his principles ahead of his own comfort, living a humble life in exile while our politicians romp around with prostitutes paid for with stolen money. Whatever you think of Wikileaks or Mr Assange's private life, they are not the only example here. There are other democratic movements in our country that are equally frightening for those who are currently drunk on power. One of the independent MPs holding the balance of power is a former Lieutenant Colonel in Australia's spy agency who was ridiculed by a prior Government and forced out of his job when he exposed the sham of the Iraq war. Neither of the major political parties wants to continue being held accountable by someone who has shown such strong principles against their campaign of death and deception. That $60 million welfare payment to big political parties was intended to be something akin to a weapon of mass destruction, obliterating independent representatives from the parliament. More recently, the same independent MP has been equally vigorous in his campaign to break the scourge of our mafia-style gambling industry with its cosy links to Australian politicians. Now it is starting to become obvious just how scary democracy can be. Motivated by the spectacle of a few independents holding our incumbent politicians to account, other Australians have also volunteered to get in on the act and try their hand at running the country.
One of Australia's richest men, Clive Palmer, has found that even with his enormous wealth (and having started planning more than a year before the election), his plans to form a political party are hampered by the fact that it can't be registered with a proper name to be listed on the ballot paper: all the candidates have to be listed under his name, Palmer, or their own names, barely distinguishable from the independent candidates. This discriminatory approach to the creation of political parties clearly favours the two big incumbent groups. Now it is a lot clearer why existing politicians needed an extra $60 million war chest: like Lance Armstrong's illegal doping program, it was intended to keep them ahead of the pack. It all goes to show that people should not take democracy for granted: constant vigilance and involvement are needed to hold leaders to account or replace them when they deviate.
